Mabel: Extending Human Interaction and Robot Rescue Designs
Abstract
The sensors include sonar, cameras, a keyboard, dead-reckoning position estimates, and a microphone. Drivers for these sensors, as well as a library for writing behaviors, were provided with the ActivMedia robot. We produced a map of the competition areas in advance using a tape measure, and on this map we drew a graph of the places the robot could reach from any given place. The map was used not only to visually direct people from one place to another at the conference center, but also for localization.

At the next level, the interpretation level, programs interpret the sensor data to produce high-level assessments about the environment and the intentions of the person with whom the robot is interacting. For example, if the vision component detects a face in the scene, it reports the person’s location and name, based on his nametag. The patron’s intentions can also be determined from dialog; for example, the robot can terminate a conversation if a patron says “goodbye.”

At the top level is the control component, which is essentially a finite state machine. If the robot detects someone in front of it, it starts a dialog with him. The system can detect a person’s presence if he speaks, types, or if his face is being tracked by the camera. If no person is detected, the robot wanders among points on the map of the competition area searching for people, keeping track of its location throughout (using Monte Carlo Localization, which is discussed later). Should it find a person, it drives toward him until it successfully reaches him or, if unsuccessful, returns to wandering among points on the map. When a person interacts with the robot, many things can happen. The robot immediately tries to find the person’s nametag; once it has obtained that information, it uses it when speaking or when displaying other information on the screen.
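The top-level control loop described above — wander between waypoints until a person is detected by speech, typing, or face tracking, then approach and interact — can be sketched as a small finite state machine. This is a hypothetical simplification: the state names and the shape of the `percepts` dictionary are illustrative, not Mabel's actual interfaces.

```python
from enum import Enum, auto

class State(Enum):
    WANDER = auto()    # visit map waypoints, localize, look for people
    APPROACH = auto()  # drive toward a detected person
    INTERACT = auto()  # dialog: read nametag, track face, answer questions

def step(state, percepts):
    """One tick of the top-level controller.

    `percepts` holds the interpretation level's high-level assessments,
    e.g. {"person_seen": bool, "reached_person": bool, "said_goodbye": bool}.
    """
    if state == State.WANDER:
        # speech, typing, or a tracked face all count as detecting a person
        return State.APPROACH if percepts.get("person_seen") else State.WANDER
    if state == State.APPROACH:
        if percepts.get("reached_person"):
            return State.INTERACT
        # lost the person: give up and resume wandering among map points
        return State.APPROACH if percepts.get("person_seen") else State.WANDER
    if state == State.INTERACT:
        # a patron saying "goodbye" terminates the conversation
        return State.WANDER if percepts.get("said_goodbye") else State.INTERACT
    raise ValueError(f"unknown state: {state}")
```

For instance, `step(State.WANDER, {"person_seen": True})` moves the controller into the approach state, and a "goodbye" during interaction drops it back to wandering.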
At the same time, the robot continues to track the patron’s face with the upper camera. Whenever the patron speaks, Mabel parses the words using Sphinx speech recognition and then uses a Bayes net to decide what the person’s intentions are.

Artificial Intelligence (AI) has at its core the creation of intelligent systems. The holy grail of the entire field is to create a general form of intelligence that can reason, learn, and interact intelligently in its environment much as a human does. However, the solution to this problem has turned out to be more difficult than anyone could have imagined. In this project, therefore, we have tried to move toward a better understanding of what it takes to create a truly intelligent system.

Mabel (the Mobile Table) is a robotic system, developed primarily by undergraduates at the University of Rochester, that can perform waypoint navigation, speech generation, speech recognition, natural language understanding, face finding, face following, nametag reading, and robot localization. Mabel uses these components of intelligence to interact with its environment in two distinct ways. It can act as a robot host at a conference, giving people information about the conference schedule, and it can act as a search and rescue robot by entering a mock disaster scene, locating mock human victims, mapping the victims’ locations, and returning safely out of the scene. Mabel was the winner of the robot host event and tied for third place in the robot search and rescue event at the 2003 International Joint Conference on Artificial Intelligence (IJCAI) in Acapulco, Mexico.

The overall design philosophy for Mabel was to integrate human interaction with robotic control. We achieved this goal by using an interactive speech system, a pan/tilt/zoom camera that actively follows faces, and another pan/tilt/zoom camera that reads patrons’ nametags.
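The intention-recognition step described above — recognized words feeding a probabilistic model that infers the patron's intent — can be sketched with a tiny naive Bayes classifier over the recognized words. This is a deliberate simplification of the paper's Bayes net: the intent labels and training utterances below are invented for illustration and are not Mabel's actual vocabulary or model.

```python
import math
from collections import Counter

# Invented example intents and training utterances (not Mabel's actual data).
TRAINING = {
    "ask_schedule": ["when is the talk", "what is the schedule",
                     "when does the session start"],
    "greeting":     ["hello there", "hi robot", "good morning"],
    "farewell":     ["goodbye", "bye now", "see you later"],
}

def train(data):
    """Count word occurrences per intent class."""
    word_counts = {c: Counter(w for s in sents for w in s.split())
                   for c, sents in data.items()}
    totals = {c: sum(wc.values()) for c, wc in word_counts.items()}
    vocab = {w for wc in word_counts.values() for w in wc}
    return word_counts, totals, vocab

def classify(utterance, word_counts, totals, vocab):
    """Pick the intent maximizing the Laplace-smoothed log-likelihood
    of the recognized words (uniform prior over intents)."""
    best, best_lp = None, -math.inf
    for c in word_counts:
        lp = sum(math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
                 for w in utterance.split())
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

In this sketch, an utterance classified as "farewell" would be the trigger for the controller to end the conversation, mirroring the "goodbye" behavior described earlier.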
In this paper, we present the changes that have occurred since the 2002 American Association for Artificial Intelligence (AAAI) conference and the methods that we used to integrate the hardware and software into a usable system.
Publication date: 2004